The application of natural language processing (NLP) to cancer pathology reports has focused on detecting cancer cases, largely ignoring precancerous cases. Improving the characterization of precancerous adenomas assists in developing diagnostic tests for early cancer detection and prevention, especially for colorectal cancer (CRC). Here we developed transformer-based deep neural network NLP models to perform CRC phenotyping, with the goal of extracting precancerous lesion attributes and distinguishing cancer from precancerous cases. We achieved a macro-F1 score of 0.914 for classifying patients into negative, non-advanced adenoma, advanced adenoma, and CRC. We further improved the performance to 0.923 using an ensemble of classifiers for cancer status classification and lesion size named entity recognition (NER). Our results demonstrate the potential of using NLP to leverage real-world health record data to facilitate the development of diagnostic tests for early cancer prevention.
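A hypothetical sketch of the ensemble logic described above: a document-level cancer-status classifier is refined with lesion sizes extracted by NER. The function name, labels, and the ≥10 mm threshold for "advanced adenoma" (a common clinical convention) are assumptions not confirmed by the abstract.

```python
def combine(status: str, lesion_sizes_mm: list) -> str:
    """Merge the classifier's coarse status with NER-extracted lesion sizes.

    The >=10 mm cutoff for upgrading an adenoma to "advanced" is a standard
    clinical convention used here for illustration only.
    """
    if status == "adenoma":
        if any(s >= 10.0 for s in lesion_sizes_mm):
            return "advanced_adenoma"
        return "non_advanced_adenoma"
    return status  # "negative" or "crc" pass through unchanged

print(combine("adenoma", [4.0, 12.0]))  # -> advanced_adenoma
print(combine("adenoma", [4.0]))        # -> non_advanced_adenoma
print(combine("crc", []))               # -> crc
```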
Deep convolutional neural networks (CNNs) have been widely used for medical image segmentation. In most studies, only the output layer is exploited to compute the final segmentation results, and the hidden representations of the deep learned features have not been well understood. In this paper, we propose a prototype segmentation (ProtoSeg) method to compute a binary segmentation map based on deep features. We measure the segmentation ability of the features by computing the Dice score between the feature segmentation map and the ground truth, termed the segmentation ability (SA) score. The SA score can quantify the segmentation abilities of deep features in different layers and units to better understand deep neural networks for segmentation. In addition, our method can provide a mean SA score, which gives a performance estimate of the output on test images without ground truth. Finally, we use the proposed ProtoSeg method to compute the segmentation map directly on input images to further understand the segmentation ability of each input image. Results are presented on segmenting tumors in brain MRI, lesions in skin images, COVID-related abnormalities in CT images, the prostate in abdominal MRI, and pancreatic masses in CT images. Our method can provide new insights for interpretable and explainable AI systems for medical image segmentation. Our code is available at \url{https://github.com/shengfly/ProtoSeg}.
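A minimal sketch of the SA-score idea: binarize a feature map, then compute its Dice overlap with the ground-truth mask. The simple mean-threshold binarization here is an illustrative stand-in for the paper's prototype-based segmentation, whose exact mechanism the abstract does not specify.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def sa_score(feature_map: np.ndarray, ground_truth: np.ndarray) -> float:
    """Binarize a feature map (mean threshold, as a stand-in for the
    prototype-based binarization) and score it against the ground truth."""
    seg = feature_map > feature_map.mean()
    return dice(seg, ground_truth.astype(bool))
```

Applied per layer and per unit, such a score ranks which deep features already "know" the segmentation before the output layer.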
The lack of standardization is a prominent issue in magnetic resonance (MR) imaging. This often causes undesired contrast variations due to differences in hardware and acquisition parameters. In recent years, MR harmonization using image synthesis with disentanglement has been proposed to compensate for these undesired contrast variations. Despite the success of existing methods, we argue that three major improvements can be made. First, most existing methods are built upon the assumption that multi-contrast MR images of the same subject share the same anatomy. This assumption is questionable since different MR contrasts are specialized to highlight different anatomical features. Second, these methods often require a fixed set of MR contrasts for training (e.g., both T1-weighted and T2-weighted images must be available), which limits their applicability. Third, existing methods are generally sensitive to imaging artifacts. In this paper, we present a novel approach, Harmonization with Attention-based Contrast, Anatomy, and Artifact Awareness (HACA3), to address these three issues. We first propose an anatomy fusion module that enables HACA3 to respect the anatomical differences between MR contrasts. HACA3 is also robust to imaging artifacts and can be trained and applied to any set of MR contrasts. Experiments show that HACA3 achieves state-of-the-art performance under multiple image quality metrics. We also demonstrate the applicability of HACA3 on downstream tasks with diverse MR datasets acquired from 21 sites with different field strengths, scanner platforms, and acquisition protocols.
This work presents a novel reconfigurable architecture for low-latency graph neural network (GNN) designs tailored to particle detectors. Accelerating GNNs for particle detectors is challenging because sub-microsecond latency is required to deploy the networks for online event selection in the Level-1 trigger of the CERN Large Hadron Collider experiments. This paper proposes a custom code transformation that applies strength reduction to the matrix multiplication operations in the fully connected graphs of interaction-network-based GNNs, thereby avoiding expensive multiplications. It exploits the sparsity patterns and the binary adjacency matrices, and avoids irregular memory accesses, leading to reduced latency and improved hardware efficiency. In addition, we introduce an outer-product-based matrix multiplication approach, enhanced by the strength reduction, for low-latency designs. A fusion step is also introduced to further reduce design latency. Moreover, a GNN-specific algorithm-hardware co-design approach is proposed, which not only finds designs with better latency but also discovers high-accuracy designs under a given latency constraint. Finally, a customizable template for this low-latency GNN hardware architecture has been designed and open-sourced; using a high-level synthesis tool, it can generate low-latency FPGA designs with efficient resource utilization. Evaluation results show that our FPGA implementation is up to 24 times faster and consumes up to 45 times less power than a GPU implementation. Compared to our previous FPGA implementation, this work achieves 6.51 to 16.7 times lower latency. Moreover, the latency of our FPGA design is low enough to enable the deployment of GNNs in sub-microsecond, real-time collider trigger systems, allowing them to benefit from the improved accuracy.
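The strength-reduction idea can be illustrated in a few lines: when the adjacency matrix is binary, multiplying it by a feature matrix needs no multiplications at all, only sums of selected rows. This software sketch is a functional analogue of the hardware transformation, not the FPGA design itself.

```python
import numpy as np

def matmul_strength_reduced(adj_rows, x):
    """Multiply a binary adjacency matrix by a feature matrix using only
    additions. adj_rows[i] lists the column indices where row i of the
    adjacency matrix is 1 (a sparse, multiplication-free encoding)."""
    out = np.zeros((len(adj_rows), x.shape[1]), dtype=x.dtype)
    for i, cols in enumerate(adj_rows):
        for j in cols:          # additions only -- no multiplies
            out[i] += x[j]
    return out

# Check against the equivalent dense computation:
A = np.array([[1, 0, 1],
              [0, 1, 0]])
X = np.arange(6.0).reshape(3, 2)
rows = [list(np.nonzero(r)[0]) for r in A]
assert np.array_equal(matmul_strength_reduced(rows, X), A @ X)
```

In hardware, each row sum becomes a small adder tree with a fixed, regular access pattern, which is what removes both the multipliers and the irregular memory accesses.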
Standard views in two-dimensional echocardiography are well established, but the quality of the acquired images is highly dependent on operator skill and is assessed subjectively. This study aims to provide an objective assessment pipeline for echocardiographic image quality by defining a new set of domain-specific quality indicators. Image quality assessment can thus be automated to enhance clinical measurement, interpretation, and real-time optimization. We developed deep neural networks for the automated assessment of echocardiographic frames randomly sampled from 11,262 adult patients. The private echocardiography dataset consists of 33,784 frames acquired between 2010 and 2020. Deep learning approaches were used to extract spatiotemporal features, and the image quality indicators were evaluated against the mean absolute error. Our quality indicators encompass anatomical and pathological elements, providing multivariate assessment scores for anatomical visibility, clarity, depth gain, and foreshortening, respectively.
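A small sketch of the evaluation step: per-attribute mean absolute error between predicted and reference quality scores. The attribute names follow the abstract; the dict-based data layout and the [0, 1] score scale are illustrative assumptions.

```python
ATTRIBUTES = ("visibility", "clarity", "depth_gain", "foreshortening")

def per_attribute_mae(pred, target):
    """MAE per quality attribute, averaged over frames.

    pred/target: lists of dicts mapping attribute name -> score in [0, 1]
    (an assumed layout; the paper's exact scale is not given in the abstract).
    """
    return {
        a: sum(abs(p[a] - t[a]) for p, t in zip(pred, target)) / len(pred)
        for a in ATTRIBUTES
    }
```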
Echocardiographic image quality assessment is not a trivial problem in transthoracic examinations. Given the prominence of in vivo examination of cardiac structures in cardiac diagnosis, it is well established that an accurate diagnosis of left ventricular function depends on the quality of the echo images. To date, visual assessment of echo images has been highly subjective, requiring specific definitions under clinical pathologies. Although poor-quality images impair quantification and diagnosis, the inherent variability of echocardiographic image quality standards points to, and provides clear evidence of, the complexity faced among different observers in clinical practice, particularly less-experienced cardiologists. In this study, we aim to analyze and define the specific quality attributes most discussed by experts, and we propose a fully trained convolutional neural network model to objectively assess such quality features.
Blood-oxygen-level-dependent (BOLD) MRI with maternal hyperoxia can assess oxygen transport within the placenta and has emerged as a promising tool for studying placental function. Measuring signal changes over time requires segmenting the placenta in each volume of the time series. Due to the large number of volumes in BOLD time series, existing studies rely on registration to map all volumes to a manually segmented template. Because the placenta undergoes large deformations due to fetal motion, maternal motion, and contractions, this approach often results in a large number of discarded volumes where the registration method fails. In this work, we propose a machine learning model based on the U-Net neural network architecture to automatically segment the placenta in BOLD MRI, applying it to every volume in a time series. We use a boundary-weighted loss function to accurately capture the placental shape. Our model is trained and tested on a cohort of 91 subjects including healthy fetuses, fetuses with fetal growth restriction, and mothers with high BMI. We achieve a Dice score of 0.83 +/- 0.04 when matched against ground-truth labels, and our model reliably segments volumes at both normoxic and hyperoxic time points in the BOLD time series. Our code and trained model are available at https://github.com/mabulnaga/automatic-placenta-segmentation.
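A minimal sketch of a boundary-weighted loss of the kind described: boundary pixels of the mask get a higher weight in a cross-entropy term. The 4-neighbor boundary test and the weight factor are illustrative assumptions; the abstract does not give the exact form of the paper's loss.

```python
import numpy as np

def boundary_weights(mask: np.ndarray, w: float = 5.0) -> np.ndarray:
    """Weight map that up-weights mask-boundary pixels by a factor w.
    A pixel is a boundary pixel if any 4-neighbor has a different label."""
    b = np.zeros_like(mask, dtype=bool)
    b[:-1, :] |= mask[:-1, :] != mask[1:, :]
    b[1:, :]  |= mask[1:, :] != mask[:-1, :]
    b[:, :-1] |= mask[:, :-1] != mask[:, 1:]
    b[:, 1:]  |= mask[:, 1:] != mask[:, :-1]
    return 1.0 + (w - 1.0) * b

def weighted_bce(pred: np.ndarray, mask: np.ndarray, w: float = 5.0) -> float:
    """Binary cross-entropy with boundary pixels weighted w times higher."""
    eps = 1e-7
    p = np.clip(pred, eps, 1 - eps)
    ce = -(mask * np.log(p) + (1 - mask) * np.log(1 - p))
    wt = boundary_weights(mask.astype(bool), w)
    return float((wt * ce).sum() / wt.sum())
```

Up-weighting the boundary penalizes shape errors more than interior errors, which is the stated motivation for capturing the placental shape accurately.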
Cables are ubiquitous in many settings but are prone to self-occlusions and knots, making them difficult to perceive and manipulate. The challenge often increases with cable length: long cables require more complex slack management and strategies to facilitate observability and reachability. In this paper, we focus on autonomously untangling cables up to 3 meters in length using a bimanual robot. We develop new motion primitives to efficiently untangle long cables, along with novel gripper jaws specialized for this task. We present Sliding and Grasping for Tangle Manipulation (SGTM), an algorithm that composes these primitives with RGBD vision to iteratively untangle cables. SGTM achieves untangling success rates of 67% on isolated overhand and figure-eight knots and 50% on more complex configurations. Supplementary material, visualizations, and videos can be found at https://sites.google.com/view/rss-2022-untangling/home.
Simulation-to-reality transfer has become a popular and highly successful method for training robotic control policies for a wide variety of tasks. However, it is often challenging to determine when a policy trained in simulation is ready to be transferred to the physical world. Deploying a policy trained on too little simulation data can result in unreliable and dangerous behaviors on physical hardware. On the other hand, excessive training in simulation can cause the policy to overfit the visual appearance and dynamics of the simulator. In this work, we study strategies to automatically determine when policies trained in simulation can be reliably transferred to a physical robot. We examine these ideas specifically in the context of robotic fabric manipulation, where successful sim2real transfer is particularly challenging due to the difficulty of modeling the dynamics and visual appearance of fabric. Results on a fabric smoothing task show that our switching criteria correlate well with real-world performance. In particular, our confidence-based switching criteria achieve an average final fabric coverage of 87.2-93.7% within 55-60% of the total training budget. See https://tinyurl.com/lsc-case for code and supplementary materials.
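A hypothetical sketch of a confidence-based switching criterion: keep training in simulation until the policy's confidence over a window of recent evaluation episodes stays above a threshold. The class name, the 0.9 threshold, and the min-over-window rule are illustrative assumptions; the paper's exact criterion may differ.

```python
from collections import deque

class SwitchingCriterion:
    """Decide when to switch from simulation training to the real robot,
    based on a rolling window of per-episode policy confidence."""

    def __init__(self, threshold: float = 0.9, window: int = 5):
        self.threshold = threshold
        self.recent = deque(maxlen=window)

    def update(self, episode_confidence: float) -> bool:
        """Record one evaluation episode; return True once every episode
        in a full window meets the confidence threshold."""
        self.recent.append(episode_confidence)
        full = len(self.recent) == self.recent.maxlen
        return full and min(self.recent) >= self.threshold
```

Requiring the minimum over a window, rather than a single high-confidence episode, guards against switching on a lucky rollout.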
The landscape of privacy laws and regulations around the world is complex and ever-changing. National and supranational laws, agreements, decrees, and other government-issued rules form a patchwork that companies must navigate to operate internationally. To examine the status and evolution of this patchwork, we introduce the Government Privacy Instructions (GPI) Corpus, a collection of 1,043 privacy laws, regulations, and guidelines covering 182 jurisdictions. The corpus enables large-scale quantitative and qualitative examination of legal focus. We examine the temporal distribution of GPI creation and illustrate the dramatic increase in privacy legislation over the past 50 years, although a finer-grained examination reveals that the rate of increase varies depending on the personal data types the GPIs address. Our exploration also shows that most privacy laws individually address relatively few types of personal data, indicating that comprehensive privacy legislation remains rare. Additionally, topic modeling results show the prevalence of common themes in GPIs, such as finance, healthcare, and telecommunications. Finally, we release the corpus to the research community to promote further study.
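The temporal-distribution analysis amounts to tallying instruments per period; a minimal sketch with toy years (not real corpus data) might look like the following. The function name and decade granularity are assumptions.

```python
from collections import Counter

def per_decade(years):
    """Tally the number of instruments enacted per decade, to chart
    legislative growth over time."""
    return Counter(10 * (y // 10) for y in years)

# Toy enactment years, for illustration only:
counts = per_decade([1974, 1995, 1998, 2016, 2018, 2018])
print(counts)
```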